
Conversation

jerryzh168 (Contributor)

Summary:
As titled.

Test Plan:
visual inspection


pytorch-bot bot commented Oct 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3206

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please review it.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label on Oct 17, 2025
@jerryzh168 jerryzh168 added the topic: documentation label and removed the CLA Signed label on Oct 17, 2025
@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from b0fc829 to 081118f on October 17, 2025 at 22:25
@meta-cla meta-cla bot added the CLA Signed label on Oct 17, 2025
```python
from torchao.quantization import Int4WeightOnlyConfig, quantize_
quantize_(model, Int4WeightOnlyConfig(group_size=32, version=1))
```
Compared to a `torch.compiled` bf16 baseline, your quantized model should be significantly smaller and faster on a single A100 GPU.
jerryzh168 (Contributor, Author) commented Oct 17, 2025:

removing these since toy model memory/latency is not meaningful, to make our README shorter
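
For readers following the thread, here is a minimal sketch of how the quoted API is exercised end to end; the toy `nn.Linear` model, the device/dtype choices, and the `torch.compile` call are illustrative assumptions, not part of the PR:

```python
import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

# Hypothetical toy model for illustration; int4 weight-only quantization
# expects CUDA tensors in bf16.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(
    device="cuda", dtype=torch.bfloat16
)

# Quantize the linear weights to int4 in place, as in the quoted README snippet.
quantize_(model, Int4WeightOnlyConfig(group_size=32, version=1))

# Optionally compile before comparing against a compiled bf16 baseline.
model = torch.compile(model, mode="max-autotune")
x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16)
out = model(x)
```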


TorchAO is integrated into some of the leading open-source libraries including:

* HuggingFace transformers with a [builtin inference backend](https://huggingface.co/docs/transformers/main/quantization/torchao) and [low bit optimizers](https://github.com/huggingface/transformers/pull/31865)
jerryzh168 (Contributor, Author) commented:

reordered a bit to put more commonly used ones earlier

@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from 081118f to d45a249 on October 17, 2025 at 22:31
@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from d45a249 to f7762ad on October 18, 2025 at 00:00
command instead::

    pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu121
    pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
A reviewer (Contributor) commented:

should this be cu128 or cu129?

jerryzh168 (Contributor, Author) replied:

sure, I can change to 128

* Integration with [FBGEMM](https://github.com/pytorch/FBGEMM/tree/main/fbgemm_gpu/experimental/gen_ai) for SOTA kernels on server GPUs
* Integration with [ExecuTorch](https://github.com/pytorch/executorch/) for edge device deployment
* Axolotl for [QAT](https://docs.axolotl.ai/docs/qat.html) and [PTQ](https://docs.axolotl.ai/docs/quantize.html)
* TorchTitan for [float8 pre-training](https://github.com/pytorch/torchtitan/blob/main/docs/float8.md)
A reviewer (Contributor) commented:

should we add unsloth too? Or are we still waiting for the blog post to link?

jerryzh168 (Contributor, Author) replied:

yeah I was waiting for you to add this

### PyTorch-Native Training-to-Serving Model Optimization
- Pre-train Llama-3.1-70B **1.5x faster** with float8 training
- Recover **77% of quantized perplexity degradation** on Llama-3.2-3B with QAT
- Quantize Llama-3-8B to int4 for **1.89x faster** inference with **58% less memory**
A reviewer (Contributor) commented:

can you also update this Jerry?

jerryzh168 (Contributor, Author) replied:

yeah I'm not sure about the latest, was planning for someone more familiar to update this
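
For context on the first bullet of the quoted section, here is a minimal sketch of float8 training conversion along the lines of torchao's float8 recipe (`convert_to_float8_training`); the toy model, its sizes, and the single training step are illustrative assumptions, not the benchmark setup behind the quoted numbers:

```python
import torch
from torchao.float8 import convert_to_float8_training

# Hypothetical toy stand-in for a transformer block; float8 training targets
# large nn.Linear matmuls on recent GPUs (e.g. H100).
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).to(device="cuda", dtype=torch.bfloat16)

# Swap eligible nn.Linear modules for float8 training variants in place.
convert_to_float8_training(model)

# torch.compile fuses the float8 scaling/casting, which is where the speedup comes from.
model = torch.compile(model)
x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
model(x).sum().backward()
```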

@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from f7762ad to 6fdadba on October 20, 2025 at 18:13

.. code-block:: bash

    pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
A reviewer (Contributor) commented:

@jerryzh168 Should we update this to use pytorch's vllm build: https://download.pytorch.org/whl/nightly/vllm ?

jerryzh168 (Contributor, Author) replied:

OK sure

@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from 6fdadba to 8e4e53f on October 20, 2025 at 19:19
@jerryzh168 jerryzh168 force-pushed the update-readme-10-2025 branch from 8e4e53f to 30f2dd8 on October 20, 2025 at 19:21
@jerryzh168 jerryzh168 merged commit 7e5d907 into main on Oct 20, 2025, with 6 of 18 checks passed.